moral foundation
Building Resilient Information Ecosystems: Large LLM-Generated Dataset of Persuasion Attacks
Kao, Hsien-Te, Panasyuk, Aleksey, Bautista, Peter, Dupree, William, Ganberg, Gabriel, Beaubien, Jeffrey M., Cassani, Laura, Volkova, Svitlana
Organizational communication is essential for public trust, but the rise of generative AI models has introduced significant challenges: these models can generate persuasive content that forms narratives competing with official messages from government and commercial organizations at speed and scale. This has left agencies in a reactive position, often unaware of how the models construct their persuasive strategies, making it harder to sustain communication effectiveness. In this paper, we introduce a large LLM-generated persuasion attack dataset comprising 134,136 attacks generated by GPT-4, Gemma 2, and Llama 3.1 on agency news. The attacks span the 23 persuasion techniques from SemEval 2023 Task 3 and are directed at 972 press releases from ten agencies. They come in two mediums, press release statements and social media posts, covering both long-form and short-form communication strategies. We analyzed the moral resonance of these persuasion attacks to understand their attack vectors: GPT-4's attacks mainly focus on Care, with Authority and Loyalty also playing a role; Gemma 2 emphasizes Care and Authority; and Llama 3.1 centers on Loyalty and Care. Analyzing LLM-generated persuasion attacks across models will enable proactive defense, help organizations build reputational armor, and propel the development of effective and resilient communications in the information ecosystem.
- Press Release (0.55)
- Research Report (0.50)
- Government > Regional Government > North America Government > United States Government (1.00)
- Government > Military (1.00)
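A minimal sketch of the attack-generation loop this abstract describes, assuming an OpenAI-compatible client; the prompt wording, the three techniques shown, and the `generate_attack` helper are illustrative assumptions, not the authors' released pipeline:

```python
# Hypothetical sketch, not the paper's code: rewrite an agency press release
# as a competing narrative using one named SemEval 2023 Task 3 technique.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

TECHNIQUES = ["Appeal_to_Authority", "Appeal_to_Fear-Prejudice",
              "Loaded_Language"]  # 3 of the 23 techniques

def generate_attack(press_release: str, technique: str,
                    medium: str = "social media post") -> str:
    """Ask the model for a persuasion attack in the requested medium."""
    prompt = (f"Rewrite the following press release as a {medium} that "
              f"counters it using the persuasion technique "
              f"'{technique}':\n\n{press_release}")
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content
```

Looping 972 press releases over 23 techniques, two mediums, and three models yields exactly 972 × 23 × 2 × 3 = 134,136 attacks, matching the dataset size reported above.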
Beyond Human Judgment: A Bayesian Evaluation of LLMs' Moral Values Understanding
Skorski, Maciej, Landowska, Alina
How do Large Language Models understand moral dimensions compared to humans? This first large-scale Bayesian evaluation of market-leading language models provides the answer. In contrast to prior work using deterministic ground truth (majority or inclusion rules), we model annotator disagreements to capture both aleatoric uncertainty (inherent human disagreement) and epistemic uncertainty (model domain sensitivity). We evaluated leading language models (Claude Sonnet 4, DeepSeek-V3, Llama 4 Maverick) on 250K+ annotations from nearly 700 annotators over 100K+ texts spanning social networks, news, and forums. Our GPU-optimized Bayesian framework processed 1M+ model queries, revealing that the AI models typically rank among the top 25% of human annotators, with balanced accuracy well above the human average. Importantly, we find that AI produces far fewer false negatives than humans, highlighting its more sensitive moral detection capabilities.
- Asia > Middle East > UAE > Abu Dhabi Emirate > Abu Dhabi (0.14)
- Europe > France (0.04)
- Europe > Poland (0.04)
- (2 more...)
- Health & Medicine (0.68)
- Information Technology > Services (0.48)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Uncertainty > Bayesian Inference (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Performance Analysis > Accuracy (1.00)
- (2 more...)
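The core modeling idea, disagreement-aware ground truth, can be sketched with a simple Beta-Binomial posterior per text. This is a toy stand-in for the paper's GPU-optimized framework, and the vote counts are invented for illustration:

```python
# Toy stand-in for the paper's Bayesian treatment: each text's "ground truth"
# is a posterior over label probabilities inferred from annotator votes, so a
# model is scored against a distribution rather than a single majority label.
from scipy import stats

def posterior_label_prob(pos_votes: int, total_votes: int,
                         a: float = 1.0, b: float = 1.0):
    """Beta posterior for the probability that a text carries a moral label."""
    return stats.beta(a + pos_votes, b + total_votes - pos_votes)

# Invented example: 7 of 10 annotators tagged a text with Care
# (aleatoric uncertainty = genuine human disagreement).
post = posterior_label_prob(7, 10)
p = post.mean()

# Expected accuracy of a model's hard "yes" decision under the posterior:
print(f"P(label) mean {p:.2f}; expected accuracy of predicting 'yes': {p:.2f}")

# A credible interval summarizes how much disagreement is irreducible.
print(f"94% credible interval: [{post.ppf(0.03):.2f}, {post.ppf(0.97):.2f}]")
```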
Differences in the Moral Foundations of Large Language Models
Large language models are increasingly being used in critical domains of politics, business, and education, but the nature of their normative ethical judgment remains opaque. Alignment research has, to date, not sufficiently utilized perspectives and insights from the field of moral psychology to inform training and evaluation of frontier models. I perform a synthetic experiment on a wide range of models from most major model providers using Jonathan Haidt's influential moral foundations theory (MFT) to elicit diverse value judgments from LLMs. Using multiple descriptive statistical approaches, I document the bias and variance of large language model responses relative to a human baseline in the original survey. My results suggest that models rely on different moral foundations from one another and from a nationally representative human baseline, and these differences increase as model capabilities increase. This work seeks to spur further analysis of LLMs using MFT, including finetuning of open-source models, and greater deliberation by policymakers on the importance of moral foundations for LLM alignment.
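The descriptive analysis could be computed roughly as sketched below: per-foundation bias as the mean deviation of repeated model ratings from the human baseline mean, and variance across samples. All numbers are illustrative placeholders, not survey data:

```python
# Illustrative sketch (not the paper's code): bias/variance of repeated LLM
# ratings on Moral Foundations Questionnaire items vs. a human baseline.
import statistics

# Hypothetical human baseline means per foundation (0-5 relevance scale).
human_baseline = {"Care": 3.6, "Fairness": 3.7, "Loyalty": 2.9,
                  "Authority": 3.1, "Sanctity": 2.7}

# Hypothetical repeated model ratings (e.g., 5 sampled runs per foundation).
model_ratings = {"Care": [4.2, 4.0, 4.3, 4.1, 4.2],
                 "Fairness": [4.0, 4.1, 3.9, 4.0, 4.2],
                 "Loyalty": [2.1, 2.3, 2.0, 2.2, 2.1],
                 "Authority": [2.4, 2.5, 2.3, 2.6, 2.4],
                 "Sanctity": [1.9, 2.0, 1.8, 2.1, 1.9]}

for foundation, samples in model_ratings.items():
    bias = statistics.mean(samples) - human_baseline[foundation]
    var = statistics.variance(samples)
    print(f"{foundation:9s} bias={bias:+.2f} variance={var:.3f}")
```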
Modeling Political Discourse with Sentence-BERT and BERTopic
Mendonca, Margarida, Figueira, Alvaro
Social media has reshaped political discourse, offering politicians a platform for direct engagement while reinforcing polarization and ideological divides. This study introduces a novel topic evolution framework that integrates BERTopic-based topic modeling with Moral Foundations Theory (MFT) to analyze the longevity and moral dimensions of political topics in Twitter activity during the 117th U.S. Congress. We propose a methodology for tracking dynamic topic shifts over time, measuring their association with moral values, and quantifying topic persistence. Our findings reveal that while overarching themes remain stable, granular topics tend to dissolve rapidly, limiting their long-term influence. Moreover, moral foundations play a critical role in topic longevity: Care and Loyalty dominate durable topics, while partisan differences manifest in distinct moral framing strategies. This work contributes to the field of social network analysis and computational political discourse by offering a scalable, interpretable approach to understanding moral-driven topic evolution on social media.
- Asia > Middle East > Israel (0.05)
- Europe > Ukraine (0.05)
- Europe > Russia (0.05)
- (7 more...)
- Information Technology > Services (1.00)
- Government > Regional Government > North America Government > United States Government (1.00)
- Government > Voting & Elections (0.93)
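A minimal sketch of the BERTopic portion of such a pipeline (the MFT scoring layer is omitted); the documents and timestamps are toy placeholders, and the embedding model name is one common Sentence-BERT choice, not necessarily the authors':

```python
# Minimal BERTopic sketch (toy data; real runs use thousands of tweets).
from bertopic import BERTopic

base = [("Vote yes on the infrastructure bill", "2021-03"),
        ("Secure our borders now", "2021-04"),
        ("Healthcare is a human right", "2021-04"),
        ("Stand with our troops", "2021-05")]
pairs = base * 25  # clustering needs a reasonably sized corpus
docs = [d for d, _ in pairs]
timestamps = [t for _, t in pairs]

# BERTopic embeds documents with a Sentence-BERT model under the hood.
topic_model = BERTopic(embedding_model="all-MiniLM-L6-v2", min_topic_size=5)
topics, probs = topic_model.fit_transform(docs)

# Topic frequency over time is the basis for measuring topic longevity.
print(topic_model.topics_over_time(docs, timestamps).head())
```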
Fairness Metric Design Exploration in Multi-Domain Moral Sentiment Classification using Transformer-Based Models
Naranbat, Battemuulen, Ziabari, Seyed Sahand Mohammadi, Husaini, Yousuf Nasser Al, Alsahag, Ali Mohammed Mansoor
Ensuring fairness in natural language processing for moral sentiment classification is challenging, particularly under cross-domain shifts where transformer models are increasingly deployed. Using the Moral Foundations Twitter Corpus (MFTC) and Moral Foundations Reddit Corpus (MFRC), this work evaluates BERT and DistilBERT in a multi-label setting with in-domain and cross-domain protocols. Aggregate performance can mask disparities: we observe pronounced asymmetry in transfer, with Twitter->Reddit degrading micro-F1 by 14.9% versus only 1.5% for Reddit->Twitter. Per-label analysis reveals fairness violations hidden by overall scores; notably, the authority label exhibits Demographic Parity Differences of 0.22-0.23 and Equalized Odds Differences of 0.40-0.41. To address this gap, we introduce the Moral Fairness Consistency (MFC) metric, which quantifies the cross-domain stability of moral foundation detection. MFC shows strong empirical validity, achieving a perfect negative correlation with Demographic Parity Difference (rho = -1.000, p < 0.001) while remaining independent of standard performance metrics. Across labels, loyalty demonstrates the highest consistency (MFC = 0.96) and authority the lowest (MFC = 0.78). These findings establish MFC as a complementary, diagnosis-oriented metric for fairness-aware evaluation of moral reasoning models, enabling more reliable deployment across heterogeneous linguistic contexts.
- Asia > Middle East > UAE > Abu Dhabi Emirate > Abu Dhabi (0.14)
- Europe > Netherlands > North Holland > Amsterdam (0.04)
- Europe > Ireland > Leinster > County Dublin > Dublin (0.04)
- (2 more...)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
- Information Technology > Communications > Social Media (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Performance Analysis > Accuracy (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (1.00)
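The abstract does not spell out MFC's formula, so the sketch below uses an assumed stand-in (one minus the absolute cross-domain F1 gap) purely to illustrate the shape of a cross-domain consistency check; the predictions are random toy data and the domain-as-group reading of demographic parity is also an assumption:

```python
# Hedged sketch: cross-domain fairness quantities for one moral-foundation
# label. The "MFC-like" score is our assumption, not the published formula.
import numpy as np
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
# Toy labels/predictions for one foundation (e.g., authority) on two domains.
y_true_twitter, y_pred_twitter = rng.integers(0, 2, 200), rng.integers(0, 2, 200)
y_true_reddit, y_pred_reddit = rng.integers(0, 2, 200), rng.integers(0, 2, 200)

def demographic_parity_difference(pred_a, pred_b):
    """Gap in positive-prediction rates, treating the two domains as groups."""
    return abs(pred_a.mean() - pred_b.mean())

f1_twitter = f1_score(y_true_twitter, y_pred_twitter)
f1_reddit = f1_score(y_true_reddit, y_pred_reddit)
mfc_like = 1 - abs(f1_twitter - f1_reddit)  # assumed stand-in for MFC

print(f"DPD={demographic_parity_difference(y_pred_twitter, y_pred_reddit):.3f}")
print(f"MFC-like consistency={mfc_like:.3f}")
```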
MoVa: Towards Generalizable Classification of Human Morals and Values
Chen, Ziyu, Sun, Junfei, Li, Chenxi, Nguyen, Tuan Dung, Yao, Jing, Yi, Xiaoyuan, Xie, Xing, Tan, Chenhao, Xie, Lexing
Identifying human morals and values embedded in language is essential to empirical studies of communication. However, researchers often face substantial difficulty navigating the diversity of theoretical frameworks and data available for their analysis. Here, we contribute MoVa, a well-documented suite of resources for generalizable classification of human morals and values, consisting of (1) 16 labeled datasets and benchmarking results from four theoretically-grounded frameworks; (2) a lightweight LLM prompting strategy that outperforms fine-tuned models across multiple domains and frameworks; and (3) a new application that helps evaluate psychological surveys. In practice, we specifically recommend a classification strategy, all@once, that scores all related concepts simultaneously, resembling the well-known multi-label classifier chain. The data and methods in MoVa can facilitate many fine-grained interpretations of human and machine communication, with potential implications for the alignment of machine behavior.
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.04)
- North America > United States > Washington > King County > Seattle (0.04)
- North America > Mexico > Mexico City > Mexico City (0.04)
- (10 more...)
- Research Report > Experimental Study (1.00)
- Research Report > New Finding (0.67)
- Law > Civil Rights & Constitutional Law (0.67)
- Health & Medicine > Therapeutic Area > Vaccines (0.45)
- Government > Regional Government > North America Government > United States Government (0.45)
- Information Technology > Artificial Intelligence > Representation & Reasoning (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Performance Analysis > Accuracy (1.00)
- (2 more...)
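The all@once idea, one prompt scoring every related label simultaneously rather than one query per label, might look like the sketch below; the prompt wording, model name, and JSON parsing are assumptions, not MoVa's released implementation:

```python
# Hypothetical all@once-style classifier: score all moral foundations in a
# single LLM call and parse the structured response.
import json
from openai import OpenAI

client = OpenAI()
LABELS = ["care", "fairness", "loyalty", "authority", "sanctity"]

def all_at_once(text: str) -> dict:
    """One call returns scores for every label (cf. classifier chains)."""
    prompt = ("For the text below, score each moral foundation from 0 to 1 "
              f"and reply with a JSON object with keys {LABELS}.\n\n"
              f"Text: {text}")
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
        response_format={"type": "json_object"},
    )
    return json.loads(resp.choices[0].message.content)

print(all_at_once("Everyone deserves equal access to healthcare."))
```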
Mapping the Course for Prompt-based Structured Prediction
Pauk, Matt, Pacheco, Maria Leonor
LLMs have been shown to be useful for a variety of language tasks, without requiring task-specific fine-tuning. However, these models often struggle with hallucinations and complex reasoning problems due to their autoregressive nature. We propose to address some of these issues, specifically in the area of structured prediction, by combining LLMs with combinatorial inference in an attempt to marry the predictive power of LLMs with the structural consistency provided by inference methods. We perform exhaustive experiments in an effort to understand which prompting strategies can effectively estimate LLM confidence values for use with symbolic inference, and show that, regardless of the prompting strategy, the addition of symbolic inference on top of prompting alone leads to more consistent and accurate predictions. Additionally, we show that calibration and fine-tuning using structured prediction objectives leads to increased performance for challenging tasks, showing that structured learning is still valuable in the era of LLMs.
- Asia > Middle East > UAE > Abu Dhabi Emirate > Abu Dhabi (0.14)
- North America > United States > California > San Francisco County > San Francisco (0.14)
- North America > United States > Colorado > Boulder County > Boulder (0.04)
- (12 more...)
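The marriage of LLM confidences and combinatorial inference can be illustrated with a brute-force toy: hypothetical per-sentence confidence scores plus one hard structural constraint. The task, labels, and numbers are invented for illustration, not taken from the paper:

```python
# Toy combinatorial inference over (hypothetical) LLM confidence scores:
# pick the joint labeling that maximizes total confidence subject to a
# hard structural constraint, instead of trusting per-slot argmaxes.
from itertools import product

confidences = [
    {"claim": 0.6, "evidence": 0.3, "other": 0.1},   # sentence 1
    {"claim": 0.5, "evidence": 0.4, "other": 0.1},   # sentence 2
    {"claim": 0.2, "evidence": 0.7, "other": 0.1},   # sentence 3
]

def valid(assignment):
    # Example structural constraint: at most one sentence may be the claim.
    return sum(label == "claim" for label in assignment) <= 1

best = max(
    (a for a in product(*[c.keys() for c in confidences]) if valid(a)),
    key=lambda a: sum(confidences[i][lab] for i, lab in enumerate(a)),
)
print(best)  # ('claim', 'evidence', 'evidence'); the unconstrained argmax
             # would have labeled sentences 1 and 2 both as claims.
```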
The Moral Gap of Large Language Models
Skorski, Maciej, Landowska, Alina
MFT has found numerous applications, including analysis of political ideology (Graham et al., 2009), environmental attitudes (Feinberg and Willer, 2013), and vaccine hesitancy (Amin et al., 2017), among others. [Figure: example statements illustrating the foundations, e.g. "Everyone deserves equal access to healthcare regardless of income" for Fairness and "Stand with our troops - they sacrifice everything for our freedom" for Loyalty.] The advent of deep learning, and particularly transformer architectures, marked a significant advancement in moral content analysis: Hoover et al. (2020) first applied deep learning models to moral content classification. The recent proposal of applying LLMs to moral content categorization (Bulla et al., 2025) showed promise but suffered from methodological limitations.
- Europe > Germany > Bremen > Bremen (0.04)
- North America > United States > Pennsylvania > Allegheny County > Pittsburgh (0.04)
- Health & Medicine > Therapeutic Area > Vaccines (0.54)
- Health & Medicine > Therapeutic Area > Immunology (0.34)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Performance Analysis > Accuracy (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (1.00)
"Just a strange pic": Evaluating 'safety' in GenAI Image safety annotation tasks from diverse annotators' perspectives
Wang, Ding, Díaz, Mark, Rastogi, Charvi, Davani, Aida, Prabhakaran, Vinodkumar, Mishra, Pushkar, Patel, Roma, Parrish, Alicia, Ashwood, Zoe, Paganini, Michela, Teh, Tian Huey, Rieser, Verena, Aroyo, Lora
Understanding what constitutes safety in AI-generated content is complex. While developers often rely on predefined taxonomies, real-world safety judgments also involve personal, social, and cultural perceptions of harm. This paper examines how annotators evaluate the safety of AI-generated images, focusing on the qualitative reasoning behind their judgments. Analyzing 5,372 open-ended comments, we find that annotators consistently invoke moral, emotional, and contextual reasoning that extends beyond structured safety categories. Many reflect on potential harm to others more than to themselves, grounding their judgments in lived experience, collective risk, and sociocultural awareness. Beyond individual perceptions, we also find that the structure of the task itself -- including annotation guidelines -- shapes how annotators interpret and express harm. Guidelines influence not only which images are flagged, but also the moral judgment behind the justifications. Annotators frequently cite factors such as image quality, visual distortion, and mismatches between prompt and output as contributing to perceived harm dimensions, which are often overlooked in standard evaluation frameworks. Our findings reveal that existing safety pipelines miss critical forms of reasoning that annotators bring to the task. We argue for evaluation designs that scaffold moral reflection, differentiate types of harm, and make space for subjective, context-sensitive interpretations of AI-generated content.
- Africa > Eswatini > Manzini > Manzini (0.04)
- North America > United States > Hawaii (0.04)
- North America > United States > Florida > Miami-Dade County > Miami (0.04)
- (3 more...)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
- Media (1.00)
- Law Enforcement & Public Safety > Crime Prevention & Enforcement (1.00)
- Law (1.00)
- Health & Medicine > Therapeutic Area > Psychiatry/Psychology (0.68)
The Convergent Ethics of AI? Analyzing Moral Foundation Priorities in Large Language Models with a Multi-Framework Approach
Coleman, Chad, Neuman, W. Russell, Dasdan, Ali, Ali, Safinah, Shah, Manan
As large language models (LLMs) are increasingly deployed in consequential decision-making contexts, systematically assessing their ethical reasoning capabilities becomes a critical imperative. This paper introduces the Priorities in Reasoning and Intrinsic Moral Evaluation (PRIME) framework -- a comprehensive methodology for analyzing moral priorities across foundational ethical dimensions including consequentialist-deontological reasoning, moral foundations theory, and Kohlberg's developmental stages. We apply this framework to six leading LLMs through a dual-protocol approach combining direct questioning with response analysis on established ethical dilemmas. Our analysis reveals striking patterns of convergence: all evaluated models demonstrate strong prioritization of care/harm and fairness/cheating foundations while consistently underweighting authority, loyalty, and sanctity dimensions. Through detailed examination of confidence metrics, response reluctance patterns, and reasoning consistency, we establish that contemporary LLMs (1) produce decisive ethical judgments, (2) demonstrate notable cross-model alignment in moral decision-making, and (3) generally correspond with empirically established human moral preferences. This research contributes a scalable, extensible methodology for ethical benchmarking while highlighting both the promising capabilities and systematic limitations in current AI moral reasoning architectures -- insights critical for responsible development as these systems assume increasingly significant societal roles.

The rapid evolution of generative large language models (LLMs) has brought the alignment issue to the forefront of AI ethics discussions -- specifically, whether these models are appropriately aligned with human values (Bostrom, 2014; Tegmark, 2017; Russell, 2019; Kosinski, 2024). As these powerful models are increasingly integrated into decision-making processes across various societal domains (Salazar & Kunc, 2025), understanding whether and how their operational logic aligns with fundamental human values becomes not just an academic question, but a critical societal imperative. In this paper we will present an analytical framework and findings to address the first two questions, and a preliminary exploratory analysis of the third. We will make the case that the answers to these questions are: yes, yes, and yes. There are caveats and exceptions, of course, but the broad pattern, we believe, is clear. Our methodology permits us to explore not just what choices the models make, but the reasoning chain of thought that leads to those decisions.
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.14)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- North America > United States > New York > New York County > New York City (0.04)
- (4 more...)
- Research Report > New Finding (0.68)
- Research Report > Experimental Study (0.46)
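A direct-questioning probe in the spirit of PRIME might look like the sketch below; the prompt format, model name, and per-foundation scoring scheme are our assumptions, not the authors' protocol:

```python
# Hedged sketch of a PRIME-style probe: pose a dilemma, elicit a judgment plus
# per-foundation relevance scores as JSON, and aggregate across items.
import json
from openai import OpenAI

client = OpenAI()
FOUNDATIONS = ["care", "fairness", "loyalty", "authority", "sanctity"]

def probe(dilemma: str, model: str = "gpt-4o-mini") -> dict:
    """Elicit one model's judgment and foundation relevance scores."""
    prompt = ("Consider the dilemma below. Reply as a JSON object with a "
              "'judgment' string and a 0-1 relevance score for each of "
              f"{FOUNDATIONS}.\n\nDilemma: {dilemma}")
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        response_format={"type": "json_object"},
    )
    return json.loads(resp.choices[0].message.content)

# Averaging per-foundation scores over many dilemmas (and across models)
# would surface the care/fairness-over-authority/loyalty/sanctity pattern
# the paper reports.
print(probe("Is it acceptable to lie to protect a friend from harm?"))
```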